SEISMIC Equity Learning Communities

Level 1 Report

Introduction

This report is the first in a series created for the SEISMIC Equity Learning Communities (SELC) project. The goal of this report, as part of the SELC project, is to allow faculty and department leaders to view institutional data from their courses of interest to evaluate whether students experience equitable outcomes in those courses. This first “Level 1” report will give an overview of the demographic composition of these courses across a range of student identities, show how course grades map onto these demographics and how these patterns have changed over time, and will also compare course grades to students’ other coursework through the concept of grade anomaly. This data is meant to facilitate discussions about how to improve equitable outcomes and elevate pedagogical techniques that might support historically marginalized students.

We want to emphasize that the student identities explored here are not exhaustive. There are many identities that may influence students’ experiences, and thus outcomes, in the classroom, and only some of these identities are collected in registrar-available or quantitative data. Quantitative data is readily available and allows us to quickly show patterns across student groups or over time. However, aspects of quantitative analysis, such as binary categorizations or statistics aggregated over whole groups, can obscure many diverse and intersectional aspects of an individual student’s experiences. As you read through the report, remember that each data point represents an individual who deserves the opportunity to succeed in these courses.

The course of interest is: BIO300

The range for this data is: 2016 - 2019

Who is in this course?

Before evaluating student course outcomes, it is important to understand: who takes this course? What demographic and academic groups do students in this course belong to?

The plot below shows the percent of students, across all terms of the course of interest, that hold the following identities: declared a STEM major, transferred from another 2- or 4-year institution, are international students or permanent residents, come from a low socioeconomic family background, are the first in their family to go to college or pursue a Bachelor’s degree (first-generation, or FirstGen), are women, or come from a background historically excluded from science based on ethnicity and/or race (PEER; Asai, 2020).

Note that these identities are not mutually exclusive; a single student can belong to any or all of these groups.

Persons historically excluded based on ethnicity or race (PEER; Asai 2020) is a category that includes Black/African-American, Indigenous/Native American, and Hispanic/Latino/a/x students. The definition of PEER closely aligns with the definition of “underrepresented minorities” used by the National Science Foundation (NSF 2023).

However, the different racial and ethnic groups included in the PEER label may have different experiences in STEM courses. These diverse experiences may be masked by the overall PEER/non-PEER categorization; certain ethnicities may experience more or less equitable outcomes. For this reason, it is also important to disaggregate our data by student ethnicity.

The following plot shows the percent of students in the course of interest (across all terms), broken down by specific racial/ethnic group.

Guiding questions:

  • Do any of the percentages of students in the course surprise you? In what way?
  • What groups are not included in these figures?
  • Given the demographic breakdown of your course, are there any groups you would be specifically interested in examining course outcomes for?
  • Do you think it could be useful to show the demographic breakdown to your students? How might you present this data to your students?

How have these demographics changed over time?

While overall percentages are informative, we can also examine trends in representation of these different student groups across course terms.

The below plot shows the percent of students in each term of the course of interest across the range of terms in the dataset. Student identities are shown in different colored lines over time. Individual terms (winter, spring, summer or fall) are labeled on the x-axis.

Guiding questions:

  • What groups, if any, are growing in the classroom in recent years?
  • Are there any yearly patterns or differences between summer terms and main terms?
  • Are there any changes to the course content or structure that could better meet the needs or specific challenges of these groups?

What majors do students in this course hold?

In addition to demographic factors, student majors also provide information about students’ course background, interest, and potentially familiarity with the material in our courses.

The below plot shows all students across all terms, broken down by major of study. Each bar shows the percent of students in each major. The color of the bar indicates whether the major is a STEM major, or a non-STEM major. (The overall percent of STEM majors in the course is shown in the bar plot above with the other student identities.)

Additionally, we may only be interested in a few majors, such as the major of the department of the course. If specified, the plot below shows the percent of majors of interest in this course, compared to all other majors.

The majors of interest are: Biological Sciences, Chemistry

Grades in this course

Student final grades are the main course outcome available from registrar data. In this section, we examine the distribution of grades in this course, and explore grade patterns across groups.

What is the grade distribution in this course?

The below plot shows the overall grade distribution for this course across all terms. Note that the histogram shows course grades numerically, on your institution’s grade scale (at most, this ranges from 0 to 4, with whole numbers representing letter grades and the decimals on either side representing plus/minus grades).

Guiding questions:

  • Does this grade distribution match what you would expect to see for this course? What about what you would like to see?

  • If the distribution does not match what you would like to see, how would you alter your assessments or pedagogical structures to better achieve that distribution?

  • Does this course have a mandatory distribution requirement? If so, is this historical distribution consistent with it?

  • If you do have a required distribution, are your assessments adequately gauging learning outcomes, so that the final grades mirror the degree of student learning?

  • How well do grades represent student success in this course? What other measures of student outcomes could you use?

How do course outcomes compare for different student groups?

The overall grade distribution gives us a sense for grade patterns in this course. However, from an equity perspective, we want to examine and compare the distributions for different student identities.

The below plot shows the full distribution of the grades for multiple student identities, both academic (STEM major, transfer students) and demographic (PEER, women, low income, first-generation, international). The distribution is shown as a density plot with the majority (25th - 75th percentiles) shaded in a darker color. This region represents the “box” portion of a box plot. Below each distribution, points represent means along with the 95% confidence interval as errorbars around the point. If 95% confidence intervals do not overlap between two groups, the means are statistically significantly different. Medians are shown as a vertical line. The dashed line shows the overall mean grade across all students (across all terms in the dataset).

While the full distribution shows the complete range of grades, comparing average grades, especially between majority and historically excluded/minoritized groups, can be informative when evaluating equity in course outcomes.

The below plot shows only the means (points) and 95% confidence intervals (errorbars) for students that hold the listed identity (triangles, in the historically excluded/minoritized group) and students who do not hold that identity (circles, students in the majority). The size of the point corresponds to the sample size. The x-axis range has been zoomed in so we can better evaluate differences between groups.

Note that it is more meaningful to compare students who do and do not hold each identity than to compare across different identities, as the student identities are not mutually exclusive (a single student may hold any or all of them).
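The group comparisons above rest on simple summary statistics: a mean grade per group, a 95% confidence interval around it, and a check for whether two intervals overlap. A minimal sketch of that arithmetic is below; the normal-approximation interval and the function names are our illustrative assumptions, not the report’s actual analysis code.

```python
import math

def mean_ci(grades, z=1.96):
    """Mean and approximate 95% confidence interval (normal approximation).

    Returns (mean, lower, upper) for a list of numeric course grades.
    """
    n = len(grades)
    mean = sum(grades) / n
    # Sample variance (n - 1 denominator), then standard error of the mean.
    var = sum((g - mean) ** 2 for g in grades) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mean, mean - half_width, mean + half_width

def intervals_overlap(ci_a, ci_b):
    """True if two (mean, lower, upper) intervals overlap.

    Non-overlapping 95% intervals indicate a statistically
    significant difference in means, as described in the report.
    """
    return ci_a[1] <= ci_b[2] and ci_b[1] <= ci_a[2]
```

In practice one would compute `mean_ci` separately for students who do and do not hold each identity, then flag the pairs whose intervals do not overlap.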

As discussed previously, the “PEER” category can mask the nuanced, different experiences of students of specific ethnic or racial groups. To explore these differences, mean course grades are disaggregated by ethnicity below.

Once again, the plot shows means and 95% confidence intervals. The size of the point corresponds to the sample size across all terms of the course. The dashed line represents the overall average grade across all students (in all terms in the dataset).

Guiding questions:

  • Is this course supporting students differently? Why might this course support some students better than others?

  • For which students is this course working well?

  • What additional information about this course or these students would you want to collect to answer these questions?

How have grades across student groups changed over time?

We can also evaluate how grading and grade distributions have changed over time in this course.

Below, we can see general trends in grade distribution over time, where the distribution of grades is shown as a density plot. The color of the line represents the term.

The below plot shows the trends over time for each student identity (each identity has its own separate plot), with points representing mean grades for all students of that identity in that term, and errorbars are the 95% confidence interval. The majoritarian group (i.e., students who do not hold the indicated identity) is shown in gray across all plots. The overall course average, across all students and all terms, is shown with a dashed line.

Guiding questions:

  • Have the overall trends in course grades been stable over time, or have they changed in recent terms?

  • If there are any overall equity gaps associated with specific student identities, have they narrowed in recent years, or widened? Are there any pedagogical or structural changes that may relate to any changes seen over time?

Comparing course grades to grades in other courses

How does overall prior performance at the university (i.e., GPAO, prior GPA) compare to student outcomes in my course?

One way we can compare outcomes in our course to students’ prior performance at our institution is by calculating grade anomaly. Grade anomaly subtracts a measure of a student’s general or prior academic performance, such as prior GPA, from their course grade.

SEISMIC projects often use grade point average omitting the course of interest (GPAO), because previous studies have found it to be a better measure of prior academic preparation than alternatives such as high school GPA or standardized test scores (Koester et al., 2016). GPAO can also be especially useful for transfer students, who may not yet have a prior GPA at our institution.

Grade anomalies can be:

  • Grade penalties - where students receive lower grades in the course relative to their other coursework, or,

  • Grade bonuses - where students receive higher grades relative to other courses.

Whether a course confers a grade bonus or a grade penalty depends on what other courses students have taken, or take in the term of interest. In many ways, the grade anomaly depends on where the course is situated in the curricular context (i.e., is this course taken along with many other large introductory STEM courses, or are the other courses taken to this point general education or electives?).
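The GPAO and grade anomaly definitions above can be sketched concretely. In this illustrative example (the record format and function names are our assumptions, not the SEISMIC codebase), each student record is a list of `(course_id, grade, credits)` tuples:

```python
def gpao(records, course_of_interest):
    """Grade point average omitting the course of interest.

    records: list of (course_id, grade, credits) tuples for one student.
    Returns a credit-weighted GPA over all other courses, or None if
    the student has no other graded coursework.
    """
    other = [(grade, credits) for cid, grade, credits in records
             if cid != course_of_interest]
    total_credits = sum(credits for _, credits in other)
    if total_credits == 0:
        return None
    return sum(grade * credits for grade, credits in other) / total_credits

def grade_anomaly(records, course_of_interest):
    """Course grade minus GPAO.

    Positive values are a grade bonus, negative values a grade penalty.
    """
    course_grades = [grade for cid, grade, _ in records
                     if cid == course_of_interest]
    baseline = gpao(records, course_of_interest)
    if not course_grades or baseline is None:
        return None
    return course_grades[0] - baseline
```

For example, a student earning a 2.7 in the course of interest while averaging roughly 3.5 elsewhere would show a grade penalty of about -0.8 in this course.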

Importantly, grade bonuses or penalties are not necessarily “good” or “bad”; their interpretation often depends on the goals of the course and the assessment structure. For example, courses that aim to evaluate students’ level of mastery of specific skills or concepts as prerequisites for future coursework (a goal of many introductory STEM courses) tend to confer larger grade penalties than other courses.

We can also observe these trends over time.

Guiding questions:

  • Does examining grade anomaly give further information on equitable outcomes than course grades alone?
  • Are students receiving a grade bonus or a grade penalty on average in this course? (see dashed line for overall course average)
  • Are there any marginalized demographic groups that are receiving more of a grade penalty or more of a grade bonus? Reflecting on the course structure and its place in the curriculum context, can you think of any reasons that might be?


Examining the impact of students’ systemic advantages

How do course outcomes differ across the spectrum of systemic advantages students may have access to in higher education?

How can we view the cumulative effect of systemic advantages students may have access to (conferred by demographic identities) overall in the context of a course?

SAI

SAI, or the Systemic Advantage Index, is one approach. This metric takes into account multiple axes of student identity, including:

  • race/ethnicity

  • gender

  • socioeconomic status

  • parental education (i.e., first-generation college-going status)

SAI adds together all of the advantages a student may have across these four demographic axes and assigns an index. The table below shows SAI for multiple combinations of identities. For instance, a Latino, first-generation, low income man (column second from left) would be considered to have SAI = 1, while a white, continuing-generation woman from a high income family would be considered to have SAI = 3 (column second from right).
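The SAI arithmetic described above can be sketched as follows. This is an illustrative implementation under the document’s convention that holding the majority/advantaged identity on each of the four axes contributes one point; the boolean flags and function name are our assumptions:

```python
def sai(peer, woman, low_income, first_gen):
    """Systemic Advantage Index: count of advantaged identities (0-4).

    Each argument is True if the student holds the historically
    excluded/minoritized identity on that axis; the advantaged
    (majority) identity on each axis adds one point to the index.
    """
    advantages = [
        not peer,        # race/ethnicity
        not woman,       # gender
        not low_income,  # socioeconomic status
        not first_gen,   # parental education
    ]
    return sum(advantages)
```

This reproduces the worked examples in the table: a Latino, first-generation, low-income man scores SAI = 1 (advantaged on gender only), while a white, continuing-generation woman from a high-income family scores SAI = 3.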


💭 Can you think of any other systemic advantages not included in this index that likely influence students’ outcomes in a course? What would be the challenges to adding those identities or advantages to this index?


How do course grades and grade anomalies compare across levels of SAI?

The below plots show raw grades and grade anomaly partitioned by the number of systemic advantages students have access to.

Guiding questions:

  • Are different levels of SAI significantly different from each other in terms of course outcomes (i.e., do 95% confidence intervals overlap)? Is there a linear relationship between SAI and grade, or is the relationship more mixed? Is there a point where students fall “below” average grades or grade anomalies? How can we better serve those students in this course?

  • If we see differences across levels of SAI, what elements of the course structure might lead to some of these differences with student systemic advantages?

Acknowledgements

This report is heavily inspired by the Foundational Course Initiative reports from the University of Michigan, created by Dr. Heather Rypkema.